Concept bottleneck models (CBMs) (Koh et al. 2020) are interpretable neural networks that first predict labels for human-interpretable concepts relevant to the prediction task, and then predict the final label based on the concept label predictions. We extend CBMs to interactive prediction settings where the model can query a human collaborator for the labels of some concepts. We develop an interaction policy that, at prediction time, chooses which concepts to request a label for so as to maximally improve the final prediction. We demonstrate that a simple policy combining concept prediction uncertainty and influence of the concept on the final prediction achieves strong performance and outperforms a static approach proposed in Koh et al. (2020) as well as active feature acquisition methods proposed in the literature. We show that the interactive CBM can achieve accuracy gains of 5-10% with only 5 interactions over competitive baselines on the Caltech-UCSD Birds, CheXpert and OAI datasets.
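The policy described above, combining concept prediction uncertainty with the concept's influence on the final label, could be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the binary-entropy uncertainty, the weight-magnitude influence measure, and the greedy product scoring are all stand-ins.

```python
import numpy as np

def concept_scores(concept_probs, final_layer_weights):
    """Score each concept for querying: predictive uncertainty (binary
    entropy of the predicted concept probability) weighted by the concept's
    influence on the final label (largest absolute linear weight across
    output classes). Both choices are illustrative assumptions."""
    p = np.clip(concept_probs, 1e-9, 1 - 1e-9)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    influence = np.abs(final_layer_weights).max(axis=0)
    return entropy * influence

def next_concept_to_query(concept_probs, final_layer_weights, already_asked):
    """Greedily pick the highest-scoring concept not yet asked about."""
    scores = concept_scores(concept_probs, final_layer_weights)
    for i in already_asked:
        scores[i] = -np.inf  # never re-query a concept
    return int(np.argmax(scores))
```

At prediction time the model would ask a human collaborator for the returned concept's label, replace the prediction with it, and repeat for the interaction budget.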
Reliable outlier detection is critical for real-world applications of deep learning models. Likelihoods produced by deep generative models, although extensively studied, are still regarded as impractical for outlier detection. For one, deep generative model likelihoods are easily biased by low-level input statistics. Second, many solutions for correcting these biases are computationally expensive or generalize poorly to complex natural datasets. Here, we explore outlier detection with a state-of-the-art deep autoregressive model: PixelCNN++. We show that the biases of PixelCNN++ stem mainly from predictions based on local dependencies. We propose two families of transformations, which we call "shaking" and "stirring", that ameliorate low-level biases and isolate the contribution of long-range dependencies to the PixelCNN++ likelihood. These transformations are computationally cheap and readily applied at evaluation time. We evaluate our approaches extensively with five grayscale and six natural image datasets and show that they achieve or exceed state-of-the-art outlier detection performance. In sum, lightweight remedies suffice to achieve robust outlier detection on images with deep generative models.
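The abstract names the transformations but does not specify them, so the following is a purely hypothetical stand-in: a local pixel permutation that destroys local correlations while preserving global content, plus a score contrasting the model likelihood before and after the transformation to isolate long-range structure. `local_shake` and `long_range_score` are illustrative names, not from the paper.

```python
import numpy as np

def local_shake(image, patch=2, rng=None):
    """Hypothetical local transformation: independently permute the pixels
    inside each non-overlapping patch, destroying local correlations while
    preserving global content and marginal pixel statistics."""
    rng = np.random.default_rng(rng)
    out = image.copy()
    h, w = image.shape
    for i in range(0, h - h % patch, patch):
        for j in range(0, w - w % patch, patch):
            block = out[i:i + patch, j:j + patch].ravel()  # copy of the patch
            rng.shuffle(block)
            out[i:i + patch, j:j + patch] = block.reshape(patch, patch)
    return out

def long_range_score(log_likelihood, image, rng=0):
    """Outlier score isolating long-range structure: the likelihood drop
    caused by the local transformation. `log_likelihood` would be a trained
    autoregressive model such as PixelCNN++."""
    return log_likelihood(image) - log_likelihood(local_shake(image, rng=rng))
```

Because the transformation is applied only at evaluation time, no retraining of the density model is needed.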
The options framework in hierarchical reinforcement learning decomposes an overall goal into a combination of options, i.e., simpler tasks, and their associated policies, enabling abstraction in the action space. Ideally, these options can be reused across different high-level goals; indeed, such reuse is necessary to realize the vision of a continual-learning agent that can effectively exploit its past experience. Prior approaches have only proposed limited forms of transfer of prelearned options to new task settings. We propose a novel option-indexing approach to hierarchical learning (OI-HRL), in which we learn an affinity function between options and the items present in the environment. This allows us to effectively reuse a large library of pretrained options in zero-shot generalization at test time, by restricting goal-directed learning to only those options relevant to the task at hand. We develop a meta-training loop that learns representations of options and environments over a series of HRL problems, by incorporating feedback about the relevance of retrieved options to the high-level goal. We evaluate OI-HRL in two simulated settings, the CraftWorld and AI2THOR environments, and show that we achieve performance competitive with an oracle baseline, and substantial gains over a baseline that has the entire option pool available for learning the hierarchical policy.
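The option index described above might be sketched as follows, assuming learned embeddings for options and environment items and a dot-product affinity. The embedding form, the max-over-items aggregation, and the top-k retrieval are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def retrieve_options(option_embs, item_embs, top_k=3):
    """Hypothetical option index: score each pretrained option by its
    maximum affinity (dot product) with any item present in the
    environment, then keep the top_k options so that goal-directed
    learning is restricted to task-relevant options."""
    affinity = option_embs @ item_embs.T   # (n_options, n_items)
    scores = affinity.max(axis=1)          # best-matching item per option
    return np.argsort(-scores)[:top_k]
```

At test time, only the retrieved subset of the pretrained option library would be exposed to the high-level policy.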
Deep networks often make confident yet incorrect predictions when presented with outlier data that differs considerably from the training distribution. Likelihoods computed by deep generative models (DGMs) are candidate metrics for outlier detection with unlabeled data. Yet, previous studies have shown that DGM likelihoods are unreliable and can be easily biased by simple transformations of the input data. Here, we examine outlier detection with variational autoencoders (VAEs), among the simplest of DGMs. We propose novel analytical and algorithmic approaches to ameliorate key biases of VAE likelihoods. Our bias corrections are sample-specific, computationally inexpensive, and readily computed for various decoder visible distributions. Next, we show that a well-known image preprocessing technique, contrast stretching, extends the effectiveness of the bias correction to further improve outlier detection. Our approach achieves state-of-the-art accuracy across nine grayscale and natural image datasets, and demonstrates significant advantages over four recent competing approaches, in terms of both speed and performance. In sum, lightweight remedies suffice to achieve robust outlier detection with VAEs.
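Contrast stretching itself is a standard preprocessing step; a minimal percentile-based version is sketched below. The 2nd/98th percentile cut-offs are a common convention, not necessarily the paper's choice, and this sketch says nothing about the paper's analytical bias corrections.

```python
import numpy as np

def contrast_stretch(image, low=2, high=98):
    """Contrast stretching: linearly rescale intensities so that the `low`
    and `high` percentiles map to 0 and 1, clipping the tails. Applied
    before computing the (bias-corrected) VAE likelihood."""
    lo, hi = np.percentile(image, [low, high])
    if hi <= lo:  # flat image: nothing to stretch
        return np.zeros_like(image, dtype=float)
    return np.clip((image - lo) / (hi - lo), 0.0, 1.0)
```

The operation is per-sample and cheap, consistent with the abstract's emphasis on lightweight, sample-specific remedies.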
Recently, there has been increasing interest in synthesizing data to improve downstream text-to-SQL tasks. In this paper, we first examined the existing synthesized datasets and discovered that state-of-the-art text-to-SQL algorithms did not further improve on popular benchmarks when trained with augmented synthetic data. We observed two shortcomings: illogical synthetic SQL queries from independent column sampling and arbitrary table joins. To address these issues, we propose a novel synthesis framework that incorporates key relationships from schema, imposes strong typing, and conducts schema-distance-weighted column sampling. We also adopt an intermediate representation (IR) for the SQL-to-text task to further improve the quality of the generated natural language questions. When existing powerful semantic parsers are pre-finetuned on our high-quality synthesized data, our experiments show that these models have significant accuracy boosts on popular benchmarks, including new state-of-the-art performance on Spider.
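The schema-distance-weighted column sampling mentioned above could look roughly like the following sketch, assuming a precomputed foreign-key-hop distance between columns; the inverse-distance weighting is an illustrative assumption, not the paper's exact formula.

```python
import random

def schema_distance_weighted_sample(columns, distances, anchor, rng=None):
    """Hypothetical schema-distance-weighted column sampling: given an
    anchor column already in the synthesized query, sample the next column
    with probability inversely proportional to its foreign-key-hop distance
    from the anchor, so that the query prefers closely related tables over
    arbitrary joins."""
    rng = rng or random.Random()
    candidates = [c for c in columns if c != anchor]
    weights = [1.0 / (1 + distances[(anchor, c)]) for c in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]
```

Compared with independent column sampling, this keeps synthesized SQL queries logically coherent with the schema's key relationships.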
Voice assistants are deployed widely and provide useful functionality. However, recent work has shown that commercial systems like Amazon Alexa and Google Home are vulnerable to voice-based confusion attacks that exploit design issues. We propose a systems-oriented defense against this class of attacks and demonstrate its functionality for Amazon Alexa. We ensure that only the skills a user intends to invoke execute in response to voice commands. Our key insight is that we can interpret a user's intentions by analyzing their activity on counterpart systems on the web and smartphones. For example, the Lyft ride-sharing Alexa skill has an Android app and a website. Our work shows how information from counterpart apps can help reduce ambiguities in the skill invocation process. We build SkillFence, a browser extension that existing voice assistant users can install to ensure that only legitimate skills run in response to their commands. Using real user data from MTurk (N = 116) and experimental trials involving synthetic and organic speech, we show that SkillFence provides a balance between usability and security by securing 90.83% of skills that a user will need, with a false acceptance rate of 19.83%.
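The counterpart-signal idea can be caricatured in a few lines: allow a skill only when its name closely matches an app or website the user already uses. The string-similarity matcher and the 0.8 threshold are assumptions for illustration; SkillFence's actual pipeline is richer than this.

```python
from difflib import SequenceMatcher

def legitimate_skill(invoked_skill, counterpart_apps, threshold=0.8):
    """Hypothetical counterpart check: permit the invoked skill only if its
    name is sufficiently similar to one of the user's counterpart apps or
    websites, blocking confusingly named squatter skills."""
    best = max(
        (SequenceMatcher(None, invoked_skill.lower(), app.lower()).ratio()
         for app in counterpart_apps),
        default=0.0,
    )
    return best >= threshold
```

A skill like "Lyft" would pass for a user with the Lyft app installed, while a near-homophone squatter skill would be rejected.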
Power grids across the world play an important societal and economic role by providing uninterrupted, reliable and transient-free power to industries, businesses and household consumers. With the advent of renewable power resources and EVs resulting in uncertain generation and highly dynamic load demands, it has become increasingly important to ensure robust operation of power networks through suitable management of transient stability issues and to localize blackout events. In light of the ever-increasing stress on modern grid infrastructure and grid operators, this paper presents a reinforcement learning (RL) framework, PowRL, to mitigate the effects of unexpected network events and reliably maintain electricity everywhere on the network at all times. PowRL leverages a novel heuristic for overload management, along with RL-guided decision making on optimal topology selection, to ensure that the grid is operated safely and reliably (with no overloads). PowRL is benchmarked on a variety of competition datasets hosted by L2RPN (Learning to Run a Power Network). Even with its reduced action space, PowRL tops the leaderboard of the L2RPN NeurIPS 2020 challenge (Robustness track) at an aggregate level, and is also the top-performing agent in the L2RPN WCCI 2020 challenge. Moreover, detailed analysis shows state-of-the-art performance by the PowRL agent in several of the test scenarios.
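One plausible reading of the heuristic-plus-RL combination is a dispatch rule: fall back to the overload heuristic when any line loading exceeds its limit, and otherwise follow the learned topology policy. This sketch, including the `rho` line-loading observation and the threshold, is an assumption about the architecture, not PowRL's actual control flow.

```python
def hybrid_act(observation, overload_heuristic, rl_policy, rho_threshold=1.0):
    """Hypothetical dispatch combining a rule-based overload heuristic with
    an RL-selected topology action: when any line loading (rho) reaches the
    threshold, defer to the heuristic; otherwise follow the learned policy."""
    if max(observation["rho"]) >= rho_threshold:
        return overload_heuristic(observation)
    return rl_policy(observation)
```

Such a split keeps the learned policy's action space small while guaranteeing a deterministic response to overloads.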
Abusive language is a concerning problem in online social media. Past research on detecting abusive language covers different platforms, languages, demographics, etc. However, models trained using these datasets do not perform well in cross-domain evaluation settings. To overcome this, a common strategy is to use a few samples from the target domain to train models to get better performance in that domain (cross-domain few-shot training). However, this might cause the models to overfit the artefacts of those samples. A compelling solution could be to guide the models toward rationales, i.e., spans of text that justify the text's label. This method has been found to improve model performance in the in-domain setting across various NLP tasks. In this paper, we propose RAFT (Rationale Adaptor for Few-shoT classification) for abusive language detection. We first build a multitask learning setup to jointly learn rationales, targets, and labels, and find a significant improvement of 6% macro F1 on the rationale detection task over training solely rationale classifiers. We introduce two rationale-integrated BERT-based architectures (the RAFT models) and evaluate our systems over five different abusive language datasets, finding that in the few-shot classification setting, RAFT-based models outperform baseline models by about 7% in macro F1 scores and perform competitively to models finetuned on other source domains. Furthermore, RAFT-based models outperform LIME/SHAP-based approaches in terms of plausibility and are close in performance in terms of faithfulness.
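The multitask setup above jointly supervises the abuse label and the rationale spans; a minimal sketch of such a joint objective follows. The specific loss forms (softmax cross-entropy on the label, token-level binary cross-entropy on rationales) and the weighting coefficient are assumptions, not RAFT's exact objective, and the target-prediction term is omitted for brevity.

```python
import numpy as np

def joint_multitask_loss(label_logits, label_y, rationale_probs, rationale_y,
                         w_rationale=1.0):
    """Hypothetical joint objective: cross-entropy on the abuse label plus
    token-level binary cross-entropy against binary rationale annotations.
    `w_rationale` is an illustrative task-weighting coefficient."""
    # softmax cross-entropy for the sequence-level label
    z = label_logits - label_logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    label_loss = -log_probs[label_y]
    # token-level BCE for rationale spans
    p = np.clip(rationale_probs, 1e-9, 1 - 1e-9)
    rat_loss = -np.mean(rationale_y * np.log(p)
                        + (1 - rationale_y) * np.log(1 - p))
    return label_loss + w_rationale * rat_loss
```

Sharing an encoder under this combined loss is what lets rationale supervision regularize the few-shot label classifier.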
Observational studies have recently received significant attention from the machine learning community, due to increasingly available non-experimental observational data and the limitations of experimental studies, such as considerable cost, impracticality, and small, less representative sample sizes. In observational studies, de-confounding is a fundamental problem for individualised treatment effect (ITE) estimation. This paper proposes disentangled representations with adversarial training to selectively balance the confounders in the binary treatment setting for ITE estimation. The adversarial training of the treatment policy selectively encourages treatment-agnostic balanced representations of the confounders and helps to estimate the ITE in observational studies via counterfactual inference. Empirical results on synthetic and real-world datasets, with varying degrees of confounding, show that our proposed approach improves on state-of-the-art methods, achieving lower error in ITE estimation.
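The abstract does not spell out the adversarial mechanism; one common way to implement adversarial balancing is a gradient-reversal update, where the representation descends the outcome loss but ascends the treatment-prediction loss. The sketch below shows that single parameter update under this assumption; it is a named alternative technique used for illustration, not this paper's method.

```python
import numpy as np

def grad_reverse_update(phi, grad_outcome, grad_treatment, lr=0.01, lam=1.0):
    """Hypothetical gradient-reversal step for confounder balancing: the
    representation parameters phi descend the outcome-loss gradient but
    ascend the treatment-prediction-loss gradient (scaled by lam), making
    the representation predictive of outcomes yet uninformative about
    which treatment was assigned."""
    return phi - lr * (grad_outcome - lam * grad_treatment)
```

Once the representation carries no treatment signal, factual and counterfactual outcomes can be estimated from the same balanced features.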
In principle, applying variational autoencoders (VAEs) to sequential data offers a method for controlled sequence generation, manipulation, and structured representation learning. However, training sequence VAEs is challenging: autoregressive decoders can often explain the data without using the latent space, a phenomenon known as posterior collapse. To mitigate this, state-of-the-art models weaken the powerful decoder by applying uniformly random dropout to the decoder input. We show theoretically that this removes pointwise mutual information provided by the decoder input, which is compensated for by utilizing the latent space. We then propose an adversarial training strategy to realize information-based stochastic dropout. Compared to uniform dropout on standard text benchmark datasets, our targeted approach simultaneously improves sequence modeling performance and the information captured in the latent space.
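The contrast with uniform dropout can be illustrated with a toy targeted-dropout step: instead of masking decoder-input tokens uniformly at random, mask the tokens scored as most informative, forcing the decoder to recover that information from the latent space. The scoring input, the fixed budget, and the UNK replacement are assumptions; the paper learns the dropout adversarially rather than from given scores.

```python
import numpy as np

UNK = 0  # assumed placeholder id for a dropped token

def targeted_dropout(token_ids, informativeness, budget):
    """Hypothetical targeted word dropout: replace the `budget` tokens with
    the highest informativeness scores by UNK, rather than dropping tokens
    uniformly at random."""
    order = np.argsort(-np.asarray(informativeness))
    out = np.asarray(token_ids).copy()
    out[order[:budget]] = UNK
    return out
```

Under the same masking budget, this removes more pointwise mutual information from the decoder input than uniform dropout does, which is the quantity the latent space must then supply.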